Published in Vol 12 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/94594.
Why Medical Education Without Artificial Intelligence Still Matters: A Neuroscience-Informed Perspective

Authors of this article:

Charles Verdonk1, 2, 3

1French Armed Forces Biomedical Research Institute, 1 rue du Général Valérie André, Brétigny-sur-Orge, France

2UMR VIFASOM, Université Paris Cité, Paris, France

3Laureate Institute for Brain Research, Tulsa, OK, United States

Corresponding Author:

Charles Verdonk, MD, PhD



In their recent viewpoint, Izquierdo-Condoy et al [1] present a comprehensive analysis of the transformative potential, current applications, and future implications of artificial intelligence (AI) in medical education. They advocate for integrating AI literacy into medical curricula as health care systems become increasingly AI driven. I fully endorse this forward-looking perspective.

However, I would like to extend the discussion by highlighting a complementary yet underexplored issue: the implications of clinical practice in contexts where AI support is unavailable, unreliable, or deliberately restricted. While Izquierdo-Condoy et al [1] analyze the risks associated with integrating AI into medical curricula, less attention is given to the consequences of AI absence following training in AI-rich environments. The authors acknowledge that technological disparities may hinder equitable AI adoption [1], implying that AI-supported care cannot be universally guaranteed.

Emerging empirical evidence suggests that the removal of AI assistance after habitual use may adversely affect cognitive engagement and task performance. In an electroencephalography study, Kosmyna et al [2] reported that reliance on generative AI during essay writing was associated with reduced alpha- and beta-band functional connectivity—interpreted as diminished distributed network engagement—compared with unaided writing. Participants who initially relied on generative AI and were subsequently required to write without AI assistance exhibited persistently reduced connectivity and lower cognitive engagement, alongside poorer memory recall [2]. Behavioral performance decrements following AI withdrawal have also been documented in technical medical tasks, such as digestive endoscopy [3]. More broadly, the medical education literature cautions that when AI substitutes for clinical reasoning (cognitive off-loading) rather than augments it, risks to skill acquisition and retention may emerge [4]. Together, these findings suggest that sustained AI-mediated training may affect broader mechanisms of skill development, with vulnerabilities becoming evident in AI-absent contexts. Although preliminary, this evidence raises the possibility that prolonged AI reliance during training may reshape neurocognitive engagement in ways that have unintended consequences when independent performance is required.

These considerations are particularly salient in high-stakes, resource-constrained environments such as remote or isolated practice, military operations, disaster and humanitarian responses, and other extreme operational settings, where health care professionals’ cognitive performance is already strained by stress. In such contexts, the absence of AI support, combined with potential AI-mediated disruption of skill development during training, could compound vulnerability precisely when independent cognitive performance, such as clinical reasoning, is most critical.

This argument does not oppose AI integration. AI will likely enhance efficiency across many health care settings. However, ensuring safe and equitable care may require preserving AI-independent competence. Drawing inspiration from high-reliability industries such as aviation—where automation failure scenarios and minimum unaided practice are standard—medical education could incorporate structured AI-withdrawal exercises and defined thresholds of unaided proficiency [5]. In parallel, longitudinal research is needed to determine whether AI-mediated training produces durable changes in brain function and clinical performance.

As medical education accelerates toward AI integration, preparing physicians to practice both with and without AI support may represent a safeguard for patient safety and professional autonomy.

Acknowledgments

Portions of this manuscript were edited with assistance from OpenAI’s GPT-5.2 model. The tool was used exclusively to improve clarity, language, and organization, and to align the text with journal-specific formatting and style requirements. No artificial intelligence system was used to generate original scientific ideas, data interpretation, or conclusions. All intellectual content, analyses, and arguments were developed and verified solely by the author, who assumes full responsibility for the integrity, accuracy, and originality of the manuscript.

Funding

The author declared that no financial support was received for this work.

Disclaimer

The opinions and assertions expressed herein are those of the author and do not necessarily reflect the official views of the institutions with which the author is affiliated.

Conflicts of Interest

None declared.

  1. Izquierdo-Condoy JS, Arias-Intriago M, Montero Corrales L, Ortiz-Prado E. Artificial intelligence in medical education: transformative potential, current applications, and future implications. JMIR Med Educ. Feb 17, 2026;12(1):e77127. [CrossRef] [Medline]
  2. Kosmyna N, Hauptmann E, Yuan YT, et al. Your brain on ChatGPT: accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. Preprint posted online on Jun 10, 2025. [CrossRef]
  3. Ho JCL, Qian Z, Lau LHS, Yip HC, Chiu PWY. Artificial intelligence in digestive endoscopy training-the past, present, and future. Dig Endosc. Jan 2026;38(1):e70047. [CrossRef] [Medline]
  4. Abdulnour REE, Gin B, Boscardin CK. Educational strategies for clinical supervision of artificial intelligence use. N Engl J Med. Aug 21, 2025;393(8):786-797. [CrossRef] [Medline]
  5. Ong AY, Merle DA, Pollreisz A, et al. Flight rules for clinical AI: lessons from aviation for human-AI collaboration in medicine. NPJ Digit Med. Jan 31, 2026;9(1):201. [CrossRef] [Medline]


Abbreviations

AI: artificial intelligence


Edited by Tiffany Leung. This is a non–peer-reviewed article. Submitted 03.Mar.2026; accepted 14.Mar.2026; published 31.Mar.2026.

Copyright

© Charles Verdonk. Originally published in JMIR Medical Education (https://mededu.jmir.org), 31.Mar.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.